Although regarded as the next frontier of computing, quantum computing is still at an early stage of development. In fact, current commercial quantum computers suffer from several critical constraints, such as noisy processes and a limited number of qubits, which affect the performance of quantum algorithms. Despite these limitations, researchers are striving to propose different frameworks for making effective use of these noisy intermediate-scale quantum (NISQ) devices. One of these mechanisms is D-Wave Systems' quantum annealer, which can tackle optimization problems by translating them into energy minimization problems. In this context, this work focuses on providing useful insights and information for solving real-world combinatorial optimization problems. The main motivation of this study is to open some of the quantum computing frontier to non-expert stakeholders. To this end, we carried out extensive experimentation in the form of a parameter sensitivity analysis. The experimentation was conducted using the Traveling Salesman Problem as the benchmark problem and employing two QUBOs: a state-of-the-art one and a heuristically generated one. Our analysis was performed on a single 7-node instance and is based on more than 200 different parameter configurations, comprising more than 3,700 unitary runs and 7 million quantum reads. Thanks to this study, findings related to the energy distribution and the most appropriate parameter settings have been obtained. Finally, an additional study was carried out with the aim of determining the efficiency of the heuristic QUBO on further TSP instances.
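As a rough illustration of the kind of QUBO encoding of the TSP mentioned above, the sketch below builds a position-based QUBO for a tiny random instance with the D-Wave Ocean dimod package and solves it with dimod's brute-force reference sampler in place of real annealing hardware. The variable naming, the penalty weight, and the instance size are assumptions for demonstration only and do not reproduce either of the two QUBO formulations studied in the paper.

```python
# Illustrative sketch: encoding a small TSP instance as a QUBO with dimod.
# Penalty weight, distance matrix, and sampler choice are assumptions; a real
# experiment would submit the model to a D-Wave annealer instead.
import itertools

import dimod
import numpy as np

n = 4  # number of cities (the paper's analysis uses a 7-node instance)
rng = np.random.default_rng(0)
coords = rng.random((n, 2))
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

A = n * dist.max()  # heuristic penalty weight for the permutation constraints


def var(city, pos):
    return f"x_{city}_{pos}"  # binary variable: city placed at tour position


Q = {}


def add(u, v, w):
    key = (u, v) if u <= v else (v, u)
    Q[key] = Q.get(key, 0.0) + w


# Objective: distance between cities placed at consecutive tour positions.
for p in range(n):
    q = (p + 1) % n
    for i, j in itertools.permutations(range(n), 2):
        add(var(i, p), var(j, q), dist[i, j])

# One-hot penalties A*(sum x - 1)^2: each city in exactly one position...
for i in range(n):
    for p in range(n):
        add(var(i, p), var(i, p), -A)
    for p, q in itertools.combinations(range(n), 2):
        add(var(i, p), var(i, q), 2 * A)

# ...and each position holding exactly one city.
for p in range(n):
    for i in range(n):
        add(var(i, p), var(i, p), -A)
    for i, j in itertools.combinations(range(n), 2):
        add(var(i, p), var(j, p), 2 * A)

bqm = dimod.BinaryQuadraticModel.from_qubo(Q, offset=2 * n * A)

# Brute-force reference solver stands in for the quantum annealer here.
best = dimod.ExactSolver().sample(bqm).first
tour = sorted((int(v.split("_")[2]), int(v.split("_")[1]))
              for v, val in best.sample.items() if val == 1)
print("energy:", best.energy, "tour:", [city for _, city in tour])
```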
Transfer optimization, understood as the exchange of information among solvers to improve their performance, has gained remarkable attention from the swarm and evolutionary computation communities in recent years. This research area is young but growing at a fast pace, and it lies at the core of a corpus of literature that expands day after day. It is undeniable that the concept of transfer optimization has taken root in the field. However, recent contributions and the evidence gathered from our own experience in this area confirm that certain critical aspects have not been properly addressed to date. This short communication aims to engage the reader in a reflective discussion of these issues, to provide reasons why they remain unsolved, and to call for urgent actions to fully overcome them. Specifically, we highlight three critical points of evolutionary multitasking optimization, arguably the paradigm within transfer optimization most actively studied in the literature: i) the rationale behind the concept of multitask optimization; ii) the claimed novelty of some proposed multitasking methods relying on evolutionary computation and swarm intelligence; and iii) the methodologies used to evaluate newly proposed multitasking algorithms. Our ultimate purpose with this critique of the weaknesses observed in these three problematic aspects is that prospective works can avoid stumbling over the same stones and ultimately achieve valuable progress in the right direction.
Given the impact of language models on the field of Natural Language Processing, a number of Spanish encoder-only masked language models (aka BERTs) have been trained and released. These models were developed either within large projects using very large private corpora or by means of smaller scale academic efforts leveraging freely available data. In this paper we present a comprehensive head-to-head comparison of language models for Spanish with the following results: (i) Previously ignored multilingual models from large companies fare better than monolingual models, substantially changing the evaluation landscape of language models in Spanish; (ii) Results across the monolingual models are not conclusive, with supposedly smaller and inferior models performing competitively. Based on these empirical results, we argue for the need for further research to understand the factors underlying them. In this sense, the effects of corpus size, data quality and pre-training techniques need to be further investigated in order to obtain Spanish monolingual models significantly better than the multilingual ones released by large private companies, especially in the face of rapid ongoing progress in the field. The recent activity in the development of language technology for Spanish is to be welcomed, but our results show that building language models remains an open, resource-heavy problem which requires marrying resources (monetary and/or computational) with the best research expertise and practice.
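As a rough illustration of such a head-to-head probing of multilingual versus monolingual Spanish models, the sketch below queries both kinds of masked language model on the same cloze example through the HuggingFace transformers fill-mask pipeline. The checkpoints and the probe sentence are assumptions for demonstration; the paper's actual comparison relies on downstream evaluation benchmarks rather than single cloze queries.

```python
# Illustrative sketch: probing a multilingual and a monolingual Spanish masked
# LM on the same cloze query. Checkpoints and the sentence are assumptions.
from transformers import pipeline

models = {
    "multilingual": "bert-base-multilingual-cased",
    "monolingual (BETO)": "dccuchile/bert-base-spanish-wwm-cased",
}

for name, checkpoint in models.items():
    fill = pipeline("fill-mask", model=checkpoint)
    # Each tokenizer defines its own mask token ([MASK] for both BERT variants).
    query = f"Madrid es la {fill.tokenizer.mask_token} de España."
    print(name)
    for pred in fill(query, top_k=3):
        print(f"  {pred['token_str']!r}  score={pred['score']:.3f}")
```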
Large pre-trained Transformer language models such as BERT have dramatically changed the field of Natural Language Processing (NLP). We present a survey of recent work that uses these large language models to solve NLP tasks via pre-training, prompting, or text generation approaches. We also present approaches that use pre-trained language models to generate data for training augmentation or other purposes. We conclude with a discussion of limitations and suggested directions for future research.
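A minimal sketch of the data-generation use case described above, assuming the HuggingFace transformers text-generation pipeline with GPT-2: a few labelled seed examples are written as a prompt and the model continues the pattern with new synthetic examples. The model choice, prompt format, and decoding settings are assumptions for illustration rather than any specific method from the surveyed work.

```python
# Illustrative sketch: using a pre-trained LM to generate additional training
# examples (data augmentation) from a few-shot prompt. Model, prompt, and
# decoding parameters are assumptions for demonstration only.
from transformers import pipeline, set_seed

set_seed(42)
generator = pipeline("text-generation", model="gpt2")

# A few labelled seed examples; the model is asked to continue the pattern.
prompt = (
    "Movie review (positive): A heartfelt story with wonderful acting.\n"
    "Movie review (positive): Visually stunning and emotionally rewarding.\n"
    "Movie review (positive):"
)

outputs = generator(
    prompt,
    max_new_tokens=20,
    num_return_sequences=3,
    do_sample=True,
    top_p=0.95,
)

for out in outputs:
    # Keep only the newly generated continuation as a candidate synthetic example.
    print(out["generated_text"][len(prompt):].strip().split("\n")[0])
```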
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
Footnotes:
7. We use 50-dimensional GloVe word embeddings (Pennington et al., 2014) trained on a combination of Gigaword 5 (Parker et al., 2011) and English Wikipedia, available at http://nlp.stanford.edu/projects/glove/.
8. https://www.mturk.com/
9. A designation that statistically identifies workers who perform high quality work across a diverse set of tasks.
10. Spanish data from 2015 and 2014 uses a 5-point scale that collapses STS labels 4 and 3, removing the distinction between unimportant and important details.
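As an illustration of the kind of averaged GloVe baseline mentioned in footnote 7, the sketch below scores sentence pairs by cosine similarity of mean word vectors and rescales the result to the 0-5 STS range. The embedding file path, the naive whitespace tokenization, and the rescaling are assumptions for demonstration and do not correspond to the task's official baseline implementation.

```python
# Illustrative sketch: a simple STS-style baseline that averages GloVe word
# vectors and scores sentence pairs by cosine similarity. File path and
# tokenization are assumptions for demonstration.
import numpy as np


def load_glove(path="glove.6B.50d.txt"):
    """Load GloVe vectors from the plain-text format distributed at
    http://nlp.stanford.edu/projects/glove/ (word followed by floats)."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors


def sentence_vector(sentence, vectors, dim=50):
    tokens = sentence.lower().split()
    vecs = [vectors[t] for t in tokens if t in vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim, dtype=np.float32)


def sts_score(s1, s2, vectors):
    """Cosine similarity of averaged vectors, rescaled to the 0-5 STS range."""
    v1, v2 = sentence_vector(s1, vectors), sentence_vector(s2, vectors)
    denom = np.linalg.norm(v1) * np.linalg.norm(v2)
    cos = float(v1 @ v2 / denom) if denom else 0.0
    return 5.0 * max(cos, 0.0)


vectors = load_glove()
print(sts_score("a man is playing a guitar", "a person plays the guitar", vectors))
```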